Results 1 - 20 of 42
2.
Biomed Opt Express ; 15(2): 772-788, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38404298

ABSTRACT

Regenerative therapies show promise in reversing sight loss caused by degenerative eye diseases. Their precise subretinal delivery can be facilitated by robotic systems together with intra-operative Optical Coherence Tomography (iOCT). However, iOCT's real-time retinal layer information is compromised by inferior image quality. To address this limitation, we introduce an unpaired video super-resolution methodology for iOCT quality enhancement. A recurrent network is proposed to leverage temporal information from iOCT sequences and spatial information from pre-operatively acquired OCT images. Additionally, a patchwise contrastive loss enables unpaired super-resolution. Extensive quantitative analysis demonstrates that our approach outperforms existing state-of-the-art iOCT super-resolution models. Furthermore, ablation studies showcase the importance of temporal aggregation and contrastive loss in elevating iOCT quality. A qualitative study involving expert clinicians also confirms this improvement. This comprehensive evaluation demonstrates our method's potential to enhance iOCT image quality, thereby facilitating successful guidance for regenerative therapies.
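For readers unfamiliar with patchwise contrastive objectives, the following is a minimal PyTorch sketch of an InfoNCE-style loss over co-located feature patches, the general idea behind unpaired translation losses of this kind. The encoder features, patch count and temperature are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a patch-wise contrastive (InfoNCE-style) loss for unpaired
# translation/super-resolution: patches at the same location in the input and
# output feature maps are positives; other sampled patches are negatives.
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_out, num_patches=256, temperature=0.07):
    """feat_src, feat_out: (B, C, H, W) feature maps from the same encoder layer."""
    b, c, h, w = feat_src.shape
    num_patches = min(num_patches, h * w)
    # Sample the same random spatial locations from both feature maps.
    idx = torch.randperm(h * w, device=feat_src.device)[:num_patches]
    q = feat_out.flatten(2)[:, :, idx].permute(0, 2, 1)   # (B, N, C) queries
    k = feat_src.flatten(2)[:, :, idx].permute(0, 2, 1)   # (B, N, C) keys
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    # Positive pair: same location; negatives: the other sampled locations.
    logits = torch.bmm(q, k.transpose(1, 2)) / temperature  # (B, N, N)
    labels = torch.arange(num_patches, device=logits.device).expand(b, -1)
    return F.cross_entropy(logits.reshape(-1, num_patches), labels.reshape(-1))
```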

3.
Sci Rep ; 14(1): 1024, 2024 01 10.
Article in English | MEDLINE | ID: mdl-38200135

ABSTRACT

Scalar translocation is a severe form of intra-cochlear trauma during cochlear implant (CI) electrode insertion. This study explored the hypothesis that the dimensions of the cochlear basal turn and the orientation of its inferior segment relative to surgically relevant anatomical structures influence the scalar translocation rates of a pre-curved CI electrode. In a cohort of 40 patients implanted with the Advanced Bionics Mid-Scala electrode array, the scalar translocation group (40%) had a significantly smaller mean distance A of the cochlear basal turn (p < 0.001) and a wider horizontal angle between the inferior segment of the cochlear basal turn and the mastoid facial nerve (p = 0.040). A logistic regression model incorporating distance A (p = 0.003) and horizontal facial nerve angle (p = 0.017) explained 44.0-59.9% of the variance in scalar translocation and correctly classified 82.5% of cases. Every 1 mm decrease in distance A was associated with a 99.2% increase in the odds of translocation [95% confidence interval 80.3%, 100%], whilst every 1-degree increase in the horizontal facial nerve angle was associated with an 18.1% increase in the odds of translocation [95% CI 3.0%, 35.5%]. The study findings provide an evidence-based argument for the development of a navigation system for optimal angulation of electrode insertion during CI surgery to reduce intra-cochlear trauma.
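As a worked illustration of how such odds-ratio statements are derived, here is a hedged Python sketch that fits a two-predictor logistic regression and converts coefficients to percentage changes in odds. The DataFrame and column names (distance_a, facial_nerve_angle, translocated) are hypothetical placeholders, not the study's data.

```python
# Hedged sketch: logistic regression for scalar translocation from two predictors,
# with coefficients converted to percentage changes in odds per unit change.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_translocation_model(df: pd.DataFrame) -> pd.Series:
    X = sm.add_constant(df[["distance_a", "facial_nerve_angle"]])
    y = df["translocated"]                  # 1 = translocation, 0 = no translocation
    model = sm.Logit(y, X).fit(disp=False)
    # exp(beta) is the odds ratio per unit increase; (exp(beta) - 1) * 100 is the
    # % change in odds. For a protective predictor such as distance A, the change
    # per 1 mm *decrease* would be reported via exp(-beta) instead.
    odds_change_pct = (np.exp(model.params) - 1.0) * 100.0
    print(model.summary())
    return odds_change_pct
```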


Assuntos
Implante Coclear , Implantes Cocleares , Traumatismos Craniocerebrais , Humanos , Cóclea/cirurgia , Eletrodos Implantados , Biônica , Translocação Genética
4.
Int J Comput Assist Radiol Surg ; 19(2): 191-198, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37354219

ABSTRACT

PURPOSE: Robot-assisted vitreoretinal surgery provides precise and consistent operations on the back of the eye. To perform this safely, knowledge of the surgical instrument's remote centre of motion (RCM) and the location of the insertion point into the eye (trocar) is required. This enables the robot to align both positions to pivot the instrument about the trocar, thus preventing any damaging lateral forces from being exerted. METHODS: Building on a system developed in previous work, this study presents a trocar localisation method that uses a micro-camera mounted on a vitreoretinal surgical forceps to track two ArUco markers attached to either side of a trocar. The trocar position is estimated as the midpoint between the markers. RESULTS: Experimental evaluation of the trocar localisation was conducted. Results showed an RMSE of 1.82 mm for the localisation of the markers and an RMSE of 1.24 mm for the trocar localisation. CONCLUSIONS: The proposed camera-based trocar localisation presents reasonable consistency and accuracy and shows improved results compared to other current methods. Optimum accuracy for this application would necessitate a 1.4 mm absolute error margin, which corresponds to the trocar's radius. The trocar localisation results fall within this margin, yet the marker localisation would require further refinement to ensure consistent localisation within the error margin. Further work will refine these position estimates and ensure the error stays consistently within this boundary.
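The marker-midpoint idea can be sketched with OpenCV as below: detect two ArUco markers, estimate each marker's pose relative to the camera, and take the midpoint of their translations as the trocar position. Marker size, dictionary and calibration inputs are placeholders, and the legacy cv2.aruco.detectMarkers API (OpenCV < 4.7) is assumed; this is not the authors' implementation.

```python
# Hedged sketch of camera-based trocar localisation via two ArUco markers.
import cv2
import numpy as np

MARKER_SIZE = 0.003  # marker edge length in metres (illustrative)

def estimate_trocar_position(image, camera_matrix, dist_coeffs):
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(image, aruco_dict)
    if ids is None or len(ids) < 2:
        return None  # both markers must be in view
    # 3D corner coordinates of a square marker centred on its own origin.
    half = MARKER_SIZE / 2.0
    obj_pts = np.array([[-half,  half, 0], [ half,  half, 0],
                        [ half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    centres = []
    for c in corners[:2]:
        ok, _, tvec = cv2.solvePnP(obj_pts, c.reshape(4, 2).astype(np.float32),
                                   camera_matrix, dist_coeffs)
        if ok:
            centres.append(tvec.ravel())
    if len(centres) < 2:
        return None
    return (centres[0] + centres[1]) / 2.0  # estimated trocar position (metres)
```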


Assuntos
Procedimentos Cirúrgicos Robóticos , Robótica , Cirurgia Vitreorretiniana , Humanos , Movimento (Física) , Instrumentos Cirúrgicos
5.
Front Robot AI ; 10: 1094114, 2023.
Article in English | MEDLINE | ID: mdl-37779576

ABSTRACT

Soft robots' natural dynamics call for the development of tailored modeling techniques for control. However, the high-dimensional configuration space of geometrically exact modeling approaches for soft robots, i.e., Cosserat rod and Finite Element Methods (FEM), has been identified as a key obstacle in controller design. To address this challenge, Reduced Order Modeling (ROM), i.e., the approximation of full-order models, and Model Order Reduction (MOR), i.e., reducing the state space dimension of a high-fidelity FEM-based model, have attracted extensive research. Although both techniques serve a similar purpose and their terms have been used interchangeably in the literature, they differ in their assumptions and implementation. This review paper provides the first in-depth survey of ROM and MOR techniques in the continuum and soft robotics landscape, to aid soft robotics researchers in selecting computationally efficient models for their specific tasks.
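To make the MOR idea concrete, here is a hedged sketch of projection-based reduction via Proper Orthogonal Decomposition (POD), one family of techniques such surveys cover: a reduced basis is built from full-order snapshots with the SVD and the high-dimensional state is projected onto it. The snapshot matrix and energy tolerance are illustrative.

```python
# Hedged sketch of POD-based model order reduction for a high-dimensional model.
import numpy as np

def pod_basis(snapshots: np.ndarray, energy: float = 0.999) -> np.ndarray:
    """snapshots: (n_dof, n_snapshots) matrix of full-order state vectors."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)      # retained "energy" per mode
    r = int(np.searchsorted(cumulative, energy)) + 1  # number of modes kept
    return U[:, :r]                                   # reduced basis V: (n_dof, r)

# Reduced coordinates and reconstruction: q = V.T @ x, x ≈ V @ q.
# For a linear FEM operator A, the Galerkin-reduced operator is A_r = V.T @ A @ V.
```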

6.
Int J Comput Assist Radiol Surg ; 18(11): 1977-1986, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37460915

ABSTRACT

PURPOSE: The use of robotics is emerging for performing interventional radiology procedures. Robots in interventional radiology are typically controlled using button presses and joystick movements. This study identified how different human-robot interfaces affect endovascular surgical performance using interventional radiology simulations. METHODS: Nine participants performed a navigation task on an interventional radiology simulator with three different human-computer interfaces. Using the Simulation Open Framework Architecture (SOFA), we developed a simulation profile of vessels, catheters and guidewires. We designed and manufactured a bespoke haptic interventional radiology controller for robotic systems to control the simulation. Metrics including time taken for navigation, number of incorrect catheterisations, number of catheter and guidewire prolapses, and forces applied to vessel walls were measured and used to characterise the interfaces. Finally, participants responded to a questionnaire to evaluate their perception of the controllers. RESULTS: Time taken for navigation, the number of incorrect catheterisations and the number of catheter and guidewire prolapses showed that the device-mimicking controller is better suited for controlling interventional neuroradiology procedures than joystick control approaches. Qualitative metrics also showed that interventional radiologists prefer a device-mimicking controller approach over a joystick approach. CONCLUSION: Of the four metrics used to compare and contrast the human-robot interfaces, three conclusively showed that a device-mimicking controller was better suited for controlling interventional neuroradiology robotics.


Assuntos
Procedimentos Endovasculares , Procedimentos Cirúrgicos Robóticos , Robótica , Humanos , Cateterismo/métodos , Cateteres , Prolapso
7.
Br J Ophthalmol ; 2023 Jun 28.
Article in English | MEDLINE | ID: mdl-37380352

ABSTRACT

PURPOSE: To determine associations between deprivation, using the Index of Multiple Deprivation (IMD and individual IMD subdomains), and incident referable diabetic retinopathy/maculopathy (termed rDR). METHODS: Anonymised demographic and screening data collected by the South-East London Diabetic Eye Screening Programme were extracted from September 2013 to December 2019. Multivariable Cox proportional hazards models were used to explore the association between the IMD, IMD subdomains and rDR. RESULTS: From 118 508 people with diabetes who attended during the study period, 88 910 (75%) were eligible. The mean (± SD) age was 59.6 (±14.7) years; 53.94% were male, 52.58% identified as white, 94.28% had type 2 diabetes and the average duration of diabetes was 5.81 (±6.9) years; rDR occurred in 7113 patients (8.00%). Known risk factors of younger age, Black ethnicity, type 2 diabetes, more severe baseline DR and diabetes duration conferred a higher risk of incident rDR. After adjusting for these known risk factors, the multivariable analysis did not show a significant association between IMD (decile 1 vs decile 10) and rDR (HR: 1.08, 95% CI: 0.87 to 1.34, p=0.511). However, high deprivation (decile 1) in three IMD subdomains was associated with rDR, namely living environment (HR: 1.64, 95% CI: 1.12 to 2.41, p=0.011), education skills (HR: 1.64, 95% CI: 1.12 to 2.41, p=0.011) and income (HR: 1.19, 95% CI: 1.02 to 1.38, p=0.024). CONCLUSION: IMD subdomains allow for the detection of associations between aspects of deprivation and rDR, which may be missed when using the aggregate IMD. The generalisation of these findings outside the UK population requires corroboration internationally.
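A hedged sketch of this kind of survival analysis is shown below, using the lifelines library to fit a multivariable Cox proportional hazards model for time to referable retinopathy. The library choice, DataFrame layout and column names are assumptions for illustration only.

```python
# Hedged sketch: multivariable Cox proportional hazards model relating IMD
# subdomain deprivation (and known risk factors) to incident referable DR.
import pandas as pd
from lifelines import CoxPHFitter

def fit_rdr_model(df: pd.DataFrame) -> CoxPHFitter:
    covariates = ["age", "ethnicity_black", "type2_diabetes",
                  "baseline_dr_grade", "diabetes_duration",
                  "living_env_decile1", "education_decile1", "income_decile1"]
    cph = CoxPHFitter()
    cph.fit(df[covariates + ["follow_up_years", "rdr_event"]],
            duration_col="follow_up_years", event_col="rdr_event")
    cph.print_summary()   # hazard ratios (exp(coef)) with 95% confidence intervals
    return cph
```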

8.
IEEE Robot Autom Lett ; 8(2): 1005-1012, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36733442

ABSTRACT

Soft robots that grow through eversion/apical extension can effectively navigate fragile environments such as ducts and vessels inside the human body. This paper presents a physics-based model of a miniature steerable eversion growing robot. We demonstrate the robot's growing, steering, stiffening and interaction capabilities. The interaction between two robot-internal components is explored, i.e., a steerable catheter for robot tip orientation and a growing sheath for robot elongation/retraction. The behaviour of the growing robot under different inner pressures and external tip forces is investigated. Simulations are carried out within the SOFA framework. Extensive experimentation with a physical robot setup shows agreement with the simulations, with a mean absolute error of 10-20% between simulation and experimental results for curvature values across catheter-only, sheath-only and full-system experiments. To our knowledge, this is the first work to explore physics-based modelling of a tendon-driven steerable eversion growing robot. While our work is motivated by early breast cancer detection through mammary duct inspection and uses our MAMMOBOT robot prototype, our approach is general and relevant to similar growing robots.

9.
Sci Rep ; 13(1): 1392, 2023 01 25.
Article in English | MEDLINE | ID: mdl-36697482

ABSTRACT

Diabetic retinopathy (DR) at risk of vision loss (referable DR) needs to be identified by retinal screening and referred to an ophthalmologist. Existing automated algorithms have mostly been developed from images acquired with high-cost mydriatic retinal cameras and cannot be applied in the settings used in most low- and middle-income countries. In this prospective multicentre study, we developed a deep learning system (DLS) that detects referable DR from retinal images acquired using a handheld non-mydriatic fundus camera by non-technical field workers in 20 sites across India. Macula-centred and optic-disc-centred images from 16,247 eyes (9778 participants) were used to train and cross-validate the DLS and risk-factor-based logistic regression models. The DLS achieved an AUROC of 0.99 (1000-times bootstrapped 95% CI 0.98-0.99) using two-field retinal images, with 93.86% (91.34-96.08) sensitivity and 96.00% (94.68-98.09) specificity at the Youden's index operating point. With single-field inputs, the DLS reached an AUROC of 0.98 (0.98-0.98) for the macula field and 0.96 (0.95-0.98) for the optic-disc field. Inter-grader performance was 90.01% (88.95-91.01) sensitivity and 96.09% (95.72-96.42) specificity. The image-based DLS outperformed all risk-factor-based models. This DLS demonstrated clinically acceptable performance for the identification of referable DR despite challenging image capture conditions.
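The evaluation quantities reported above (bootstrapped AUROC confidence interval and sensitivity/specificity at the Youden's index point) can be computed as in the hedged sketch below; y_true and y_score stand in for the ground-truth labels and DLS outputs, which are not provided here.

```python
# Hedged sketch: AUROC with a bootstrap 95% CI and the Youden's index operating point.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def evaluate_dls(y_true, y_score, n_boot=1000, seed=0):
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    auroc = roc_auc_score(y_true, y_score)
    rng = np.random.default_rng(seed)
    boot = []
    for _ in range(n_boot):                         # bootstrap resampling of the AUROC
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:
            continue                                # need both classes in the resample
        boot.append(roc_auc_score(y_true[idx], y_score[idx]))
    ci = np.percentile(boot, [2.5, 97.5])
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr                                   # Youden's J statistic
    best = int(np.argmax(j))
    return {"auroc": auroc, "auroc_95ci": ci, "threshold": thresholds[best],
            "sensitivity": tpr[best], "specificity": 1 - fpr[best]}
```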


Subjects
Deep Learning, Diabetic Retinopathy, Diagnostic Imaging, Humans, Diabetes Mellitus/pathology, Diabetic Retinopathy/diagnostic imaging, Mass Screening/methods, Mydriatics, Photography/methods, Prospective Studies, Retina/diagnostic imaging, Sensitivity and Specificity, Diagnostic Imaging/methods
10.
Diabet Med ; 40(3): e14952, 2023 03.
Article in English | MEDLINE | ID: mdl-36054221

ABSTRACT

AIM: To explore if novel non-invasive diagnostic technologies identify early small nerve fibre and retinal neurovascular pathology in prediabetes. METHODS: Participants with normoglycaemia, prediabetes or type 2 diabetes underwent an exploratory cross-sectional analysis with optical coherence tomography angiography (OCT-A), handheld electroretinography (ERG), corneal confocal microscopy (CCM) and evaluation of electrochemical skin conductance (ESC). RESULTS: Seventy-five participants with normoglycaemia (n = 20), prediabetes (n = 29) and type 2 diabetes (n = 26) were studied. Compared with normoglycaemia, mean peak ERG amplitudes of retinal responses at low (16-Td·s: 4.05 µV, 95% confidence interval [95% CI] 0.96-7.13) and high (32-Td·s: 5.20 µV, 95% CI 1.54-8.86) retinal illuminance were lower in prediabetes, as were OCT-A parafoveal vessel densities in superficial (0.051 pixels/mm², 95% CI 0.005-0.095) and deep (0.048 pixels/mm², 95% CI 0.003-0.093) retinal layers. There were no differences in CCM or ESC measurements between these two groups. Correlations between HbA1c and peak ERG amplitude at 32-Td·s (r = -0.256, p = 0.028), implicit time at 32-Td·s (r = 0.422, p < 0.001) and 16-Td·s (r = 0.327, p = 0.005), OCT parafoveal vessel density in the superficial (r = -0.238, p = 0.049) and deep (r = -0.3, p = 0.017) retinal layers, corneal nerve fibre length (CNFL) (r = -0.293, p = 0.017), and ESC-hands (r = -0.244, p = 0.035) were observed. HOMA-IR was a predictor of CNFD (β = -0.94, 95% CI -1.66 to -0.21, p = 0.012) and CNBD (β = -5.02, 95% CI -10.01 to -0.05, p = 0.048). CONCLUSIONS: The glucose threshold for the diagnosis of diabetes is based on emergent retinopathy on fundus examination. We show that both abnormal retinal neurovascular structure (OCT-A) and function (ERG) may precede retinopathy in prediabetes, which require confirmation in larger, adequately powered studies.


Subjects
Type 2 Diabetes Mellitus, Prediabetic State, Retinal Diseases, Humans, Prediabetic State/diagnosis, Type 2 Diabetes Mellitus/diagnosis, Cross-Sectional Studies, Retina
11.
IEEE Trans Robot ; 39(6): 4500-4519, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38249319

ABSTRACT

Aortic valve surgery is the preferred procedure for replacing a damaged valve with an artificial one. The ValveTech robotic platform comprises a flexible articulated manipulator and a surgical interface supporting the effective delivery of an artificial valve by teleoperation and endoscopic vision. This article presents our recent work on force-perceptive, safe, semiautonomous navigation of the ValveTech platform prior to valve implantation. First, we present a force observer that transfers forces from the manipulator body and tip to a haptic interface. Second, we demonstrate how hybrid forward/inverse mechanics, together with endoscopic visual servoing, lead to autonomous valve positioning. Benchtop experiments and tests on an artificial phantom quantify the performance of the developed robot controller and navigator. Valves can be autonomously delivered with a 2.0±0.5 mm position error and a minimal misalignment of 3.4±0.9°. The hybrid force/shape observer (FSO) algorithm was able to predict distributed external forces on the articulated manipulator body with an average error of 0.09 N. FSO can also estimate loads on the tip with an average accuracy of 3.3%. The presented system can lead to better patient care, delivery outcome, and surgeon comfort during aortic valve surgery, without requiring sensorization of the robot tip, and therefore obviating miniaturization constraints.

12.
Sci Rep ; 12(1): 11196, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35778615

ABSTRACT

Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect-field and ungradable samples which require curation, a laborious task to perform manually. We developed and validated single- and multi-output laterality, retinal presence, retinal field and gradability classification deep learning (DL) models for automated curation. The internal dataset comprised 7743 images from DR screening in the UK, with 1479 external test images from Portugal and Paraguay. Internal vs external multi-output laterality AUROCs were right (0.994 vs 0.905), left (0.994 vs 0.911) and unidentifiable (0.996 vs 0.680). Retinal presence AUROCs were 1.000 vs 1.000. Retinal field AUROCs were macula (0.994 vs 0.955), nasal (0.995 vs 0.962) and other retinal field (0.997 vs 0.944). Gradability AUROCs were 0.985 vs 0.918. DL effectively detects the laterality, retinal presence, retinal field and gradability of DR screening images, with generalisation between centres and populations. DL models could be used for automated image curation within DR screening.


Subjects
Deep Learning, Diabetes Mellitus, Diabetic Retinopathy, Macula Lutea, Diabetic Retinopathy/diagnostic imaging, Humans, Mass Screening/methods, Retina/diagnostic imaging
13.
Int J Comput Assist Radiol Surg ; 17(5): 877-883, 2022 May.
Article in English | MEDLINE | ID: mdl-35364774

ABSTRACT

PURPOSE: Intra-retinal delivery of novel sight-restoring therapies will require the precision of robotic systems accompanied by excellent visualisation of retinal layers. Intra-operative Optical Coherence Tomography (iOCT) provides cross-sectional retinal images in real time, but at the cost of image quality that is insufficient for intra-retinal therapy delivery. This paper proposes a super-resolution methodology that improves iOCT image quality by leveraging the spatiotemporal consistency of incoming iOCT video streams. METHODS: To overcome the absence of ground truth high-resolution (HR) images, we first generate HR iOCT images by fusing spatially aligned iOCT video frames. Then, we automatically assess the quality of the HR images on key retinal layers using a deep semantic segmentation model. Finally, we use image-to-image translation models (Pix2Pix and CycleGAN) to enhance the quality of low-resolution (LR) images via quality transfer from the estimated HR domain. RESULTS: Our proposed methodology generates iOCT images of improved quality according to both full-reference and no-reference metrics. A qualitative study with expert clinicians also confirms the improvement in the delineation of pertinent layers and in the reduction of artefacts. Furthermore, our approach outperforms conventional denoising filters and the learning-based state of the art. CONCLUSIONS: The results indicate that learning-based methods using the HR domain estimated through our pipeline can be used to enhance iOCT image quality. Therefore, the proposed method can computationally augment the capabilities of iOCT imaging, helping this modality support the vitreoretinal surgical interventions of the future.
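A hedged sketch of the frame-fusion step is shown below: consecutive iOCT frames are registered to a reference frame and averaged to suppress speckle noise, yielding a pseudo-HR image. The ECC registration, affine motion model and iteration settings are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: generate a pseudo-high-resolution frame by aligning and
# averaging neighbouring video frames (OpenCV ECC registration).
import cv2
import numpy as np

def fuse_frames(frames):
    """frames: list of single-channel float32 images of identical size."""
    ref = frames[len(frames) // 2]                     # middle frame as reference
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    aligned = [ref]
    for f in frames:
        if f is ref:
            continue
        warp = np.eye(2, 3, dtype=np.float32)          # affine initialisation
        try:
            _, warp = cv2.findTransformECC(ref, f, warp, cv2.MOTION_AFFINE, criteria)
        except cv2.error:
            continue                                   # skip frames that fail to converge
        aligned.append(cv2.warpAffine(f, warp, (ref.shape[1], ref.shape[0]),
                                      flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP))
    return np.mean(aligned, axis=0)                    # fused pseudo-HR frame
```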


Subjects
Retina, Optical Coherence Tomography, Cross-Sectional Studies, Humans, Retina/diagnostic imaging, Retina/surgery, Slit Lamp, Optical Coherence Tomography/methods
14.
EBioMedicine ; 76: 103868, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35172957

ABSTRACT

BACKGROUND: The manufacturing of any standard mechanical ventilator cannot rapidly be upscaled to several thousand units per week, largely due to supply chain limitations. The aim of this study was to design, verify and perform a pre-clinical evaluation of a mechanical ventilator based on components not required for standard ventilators, and that met the specifications provided by the Medicines and Healthcare products Regulatory Agency (MHRA) for rapidly manufactured ventilator systems (RMVS). METHODS: The design utilises closed-loop negative-feedback control, with real-time monitoring and alarms. Using a standard test lung, we determined the difference between delivered and target tidal volume (VT) at respiratory rates between 20 and 29 breaths per minute, and the ventilator's ability to deliver consistent VT during continuous operation for >14 days (RMVS specification). Additionally, four anaesthetised domestic pigs (3 male, 1 female) were studied before and after lung injury to provide evidence of the ventilator's functionality and its ability to support spontaneous breathing. FINDINGS: Continuous operation lasted 23 days, when the greatest difference between delivered and target VT was 10% at inspiratory flow rates >825 mL/s. In the pre-clinical evaluation, the VT difference was -1 (-90 to 88) mL [mean (LoA)], and the positive end-expiratory pressure (PEEP) difference was -2 (-8 to 4) cmH2O. Triggering of VT delivery by pressures below PEEP demonstrated support of spontaneous ventilation. INTERPRETATION: The mechanical ventilator presented meets the MHRA therapy standards for RMVS and, being based on largely available components, can be manufactured at scale. FUNDING: Work supported by the Wellcome/EPSRC Centre for Medical Engineering, King's Together Fund and Oxford University.
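For illustration of closed-loop negative-feedback volume control in general terms, here is a minimal sketch of a discrete PI controller that adjusts an inspiratory valve command from the error between target and measured tidal volume. The gains, limits and interfaces are hypothetical placeholders and do not represent the device's firmware.

```python
# Hedged sketch: PI controller with output clamping and anti-windup for
# closed-loop tidal-volume (VT) regulation.
class TidalVolumePI:
    def __init__(self, kp=0.8, ki=0.3, u_min=0.0, u_max=1.0):
        self.kp, self.ki = kp, ki
        self.u_min, self.u_max = u_min, u_max    # valve command limits (normalised)
        self.integral = 0.0

    def update(self, target_vt_ml, measured_vt_ml, dt_s):
        error = target_vt_ml - measured_vt_ml    # negative feedback on the VT error
        self.integral += error * dt_s
        u = self.kp * error + self.ki * self.integral
        if u > self.u_max or u < self.u_min:     # clamp output and undo windup
            self.integral -= error * dt_s
            u = min(max(u, self.u_min), self.u_max)
        return u                                  # normalised valve opening command
```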


Subjects
Equipment Design, Artificial Respiration/instrumentation, Animals, COVID-19/pathology, COVID-19/prevention & control, COVID-19/virology, Female, Male, Respiratory Rate, SARS-CoV-2/isolation & purification, Swine, Tidal Volume
15.
J Clin Med ; 11(3)2022 Jan 26.
Article in English | MEDLINE | ID: mdl-35160065

ABSTRACT

Artificial intelligence has showcased clear capabilities to automatically grade diabetic retinopathy (DR) on mydriatic retinal images captured by clinical experts on fixed table-top retinal cameras within hospital settings. However, in many low- and middle-income countries, screening for DR revolves around minimally trained field workers using handheld non-mydriatic cameras in community settings. This prospective study evaluated the diagnostic accuracy of a deep learning algorithm developed by the Singapore Eye Research Institute using mydriatic retinal images, commercially available as Zeiss VISUHEALTH-AI DR, on images captured by field workers on a Zeiss Visuscout® 100 non-mydriatic handheld camera from people with diabetes in a house-to-house cross-sectional study across 20 regions in India. A total of 20,489 patient eyes from 11,199 patients were used to evaluate algorithm performance in identifying referable DR, non-referable DR, and gradability. For each category, the algorithm achieved precision values of 29.60 (95% CI 27.40, 31.88), 92.56 (92.13, 92.97), and 58.58 (56.97, 60.19), recall values of 62.69 (59.17, 66.12), 85.65 (85.11, 86.18), and 65.06 (63.40, 66.69), and F-score values of 40.22 (38.25, 42.21), 88.97 (88.62, 89.31), and 61.65 (60.50, 62.80), respectively. Model performance reached 91.22 (90.79, 91.64) sensitivity and 65.06 (63.40, 66.69) specificity at detecting gradability, and 72.08 (70.68, 73.46) sensitivity and 85.65 (85.11, 86.18) specificity for the detection of all referable eyes. Algorithm accuracy is dependent on the quality of acquired retinal images, and this is a major limiting step for its global implementation in community non-mydriatic DR screening using handheld cameras. This study highlights the need to develop and train deep learning-based screening tools in such conditions before implementation.
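The per-category precision, recall and F-score figures above can be reproduced from predictions with a short sketch like the one below; the label names are placeholders for the three reported categories, not the study's code.

```python
# Hedged sketch: per-category precision, recall and F-score from predictions.
from sklearn.metrics import precision_recall_fscore_support

def per_category_metrics(y_true, y_pred):
    labels = ["referable_dr", "non_referable_dr", "ungradable"]
    precision, recall, f_score, support = precision_recall_fscore_support(
        y_true, y_pred, labels=labels, zero_division=0)
    return {lab: {"precision": p, "recall": r, "f_score": f, "n": int(n)}
            for lab, p, r, f, n in zip(labels, precision, recall, f_score, support)}
```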

16.
Article in English | MEDLINE | ID: mdl-38013837

ABSTRACT

Current laparoscopic camera motion automation relies on rule-based approaches or focuses only on surgical tools. Imitation Learning (IL) methods could alleviate these shortcomings but have so far been applied only to oversimplified setups. In this work, we instead introduce a method that extracts a laparoscope holder's actions from videos of laparoscopic interventions. We synthetically add camera motion to a newly acquired dataset of camera-motion-free da Vinci surgery image sequences through a novel homography generation algorithm. The synthetic camera motion serves as a supervisory signal for camera motion estimation that is invariant to object and tool motion. We perform an extensive evaluation of state-of-the-art (SOTA) Deep Neural Networks (DNNs) across multiple compute regimes, finding that our method transfers from our camera-motion-free da Vinci surgery dataset to videos of laparoscopic interventions, outperforming classical homography estimation approaches in both precision (by 41%) and CPU runtime (by 43%).
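The general recipe for generating synthetic camera-motion supervision can be sketched as follows: randomly perturb the image corners, compute the corresponding homography, and warp the frame, so the homography itself becomes the regression target. The perturbation range and corner parameterisation are illustrative assumptions, not the paper's generator.

```python
# Hedged sketch: random homography generation for self-supervised camera-motion
# estimation (warped frame + ground-truth homography as a training pair).
import cv2
import numpy as np

def random_homography_pair(image, max_shift=32, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    dst = src + rng.uniform(-max_shift, max_shift, size=(4, 2)).astype(np.float32)
    H = cv2.getPerspectiveTransform(src, dst)        # ground-truth "camera motion"
    warped = cv2.warpPerspective(image, H, (w, h))   # synthetically moved frame
    return warped, H
```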

17.
J Neurointerv Surg ; 14(6): 539-545, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34799439

ABSTRACT

BACKGROUND: Robotically performed neurointerventional surgery has the potential to reduce occupational hazards to staff and to perform interventions with greater precision, and could be a viable solution for teleoperated neurointerventional procedures. OBJECTIVE: To determine the indication, robotic systems used, efficacy, safety, and the degree of manual assistance required for robotically performed neurointervention. METHODS: We conducted a systematic review of the literature up to, and including, articles published on April 12, 2021. Medline, PubMed, Embase, and Cochrane register databases were searched using medical subject heading terms to identify reports of robotically performed neurointervention, including diagnostic cerebral angiography and carotid artery intervention. RESULTS: A total of 8 articles treating 81 patients were included. Only one case report used a robotic system for intracranial intervention, the remaining indications being cerebral angiography and carotid artery intervention. Only one study performed a comparison of robotic and manual procedures. Across all studies, the technical success rate was 96% and the clinical success rate was 100%. All cases required a degree of manual assistance. No studies had clearly defined patient selection criteria, reference standards, or index tests, preventing meaningful statistical analysis. CONCLUSIONS: Given the clinical success, it is plausible that robotically performed neurointerventional procedures will eventually benefit patients and reduce occupational hazards for staff; however, there is no high-level efficacy and safety evidence to support this assertion. Limitations of current robotic systems and the challenges that must be overcome to realize the potential for remote teleoperated neurointervention require further investigation.


Subjects
Robotics, Cerebral Angiography, Humans, Vascular Surgical Procedures
18.
Front Robot AI ; 8: 752290, 2021.
Article in English | MEDLINE | ID: mdl-34869614

ABSTRACT

This paper presents a multi-purpose gripping and incision tool-set to reduce the number of required manipulators for targeted therapeutics delivery in Minimally Invasive Surgery. We have recently proposed the use of multi-arm Concentric Tube Robots (CTR) consisting of an incision, a camera, and a gripper manipulator for deep orbital interventions, with a focus on Optic Nerve Sheath Fenestration (ONSF). The proposed prototype in this research, called Gripe-Needle, is a needle equipped with a sticky suction cup gripper capable of performing both gripping of target tissue and incision tasks in the optic nerve area by exploiting the multi-tube arrangement of a CTR for actuation of the different tool-set units. As a result, there will be no need for an independent gripper arm for an incision task. The CTR innermost tube is equipped with a needle, providing the pathway for drug delivery, and the immediate outer tube is attached to the suction cup, providing the suction pathway. Based on experiments on various materials, we observed that adding a sticky surface with bio-inspired grooves to a normal suction cup gripper has many advantages, such as: 1) enhanced adhesion through material stickiness and by air-tightening the contact surface, 2) maintained adhesion despite internal pressure variations, e.g. due to the needle motion, and 3) sliding resistance. Simple Finite Element and theoretical modeling frameworks are proposed, based on which a miniature tool-set is designed to achieve the required gripping forces during ONSF. The final designs were successfully tested for accessing the optic nerve of a realistic eye phantom in a skull eye orbit, robust gripping and incision on units of a plastic bubble wrap sample, and manipulating different tissue types of porcine eye samples.

19.
Sci Rep ; 11(1): 9469, 2021 05 04.
Article in English | MEDLINE | ID: mdl-33947946

ABSTRACT

Screening effectively identifies patients at risk of sight-threatening diabetic retinopathy (STDR) when retinal images are captured through dilated pupils. Pharmacological mydriasis is not logistically feasible in non-clinical, community DR screening, where acquiring gradable retinal images using handheld devices exhibits high technical failure rates, reducing STDR detection. Deep learning (DL) based gradability predictions at acquisition could prompt device operators to recapture insufficient-quality images, increasing the proportion of gradable images and consequently STDR detection. Non-mydriatic retinal images were captured as part of SMART India, a cross-sectional, multi-site, community-based, house-to-house DR screening study conducted between August 2018 and December 2019 using the Zeiss Visuscout 100 handheld camera. From 18,277 patient eyes (40,126 images), 16,170 patient eyes (35,319 images) were eligible, and 3261 retinal images (1490 patient eyes) were sampled and then labelled by two ophthalmologists. The compact DL model's area under the receiver operating characteristic curve was 0.93 (0.01) following five-fold cross-validation. Compact DL model agreement (kappa) was 0.58, 0.69 and 0.69 for the high-specificity, balanced sensitivity/specificity and high-sensitivity operating points, compared to an inter-grader agreement of 0.59. Compact DL gradability model performance was favourable compared to the ophthalmologists. Compact DL models can effectively classify non-mydriatic, handheld retinal image gradability, with potential applications within community-based DR screening.
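The kappa agreement analysis described above can be sketched as follows: binarise the model's gradability scores at a chosen operating-point threshold and compute Cohen's kappa against an ophthalmologist's labels. The threshold and inputs are hypothetical placeholders.

```python
# Hedged sketch: Cohen's kappa between grader labels and thresholded model scores.
from sklearn.metrics import cohen_kappa_score

def gradability_kappa(grader_labels, model_scores, threshold=0.5):
    model_labels = [int(s >= threshold) for s in model_scores]  # 1 = gradable
    return cohen_kappa_score(grader_labels, model_labels)
```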


Subjects
Diabetic Retinopathy/diagnostic imaging, Diabetic Retinopathy/diagnosis, Retina/diagnostic imaging, Cross-Sectional Studies, Deep Learning, Female, Humans, India, Male, Mass Screening/methods, Middle Aged, Mydriatics/administration & dosage, Photography/methods, ROC Curve, Sensitivity and Specificity
20.
Front Robot AI ; 8: 611866, 2021.
Article in English | MEDLINE | ID: mdl-34012980

ABSTRACT

In this paper, we design and develop a novel robotic bronchoscope for sampling of the distal lung in mechanically ventilated (MV) patients in critical care units. Despite the high cost and the attributable morbidity and mortality of MV patients with pneumonia, which approaches 40%, sampling of the distal lung in MV patients suffering from a range of lung diseases such as COVID-19 is not standardised, lacks reproducibility and requires expert operators. We propose a robotic bronchoscope that enables repeatable sampling and guidance to distal lung pathologies by overcoming significant challenges encountered whilst performing bronchoscopy in MV patients, namely limited dexterity, the large size of the bronchoscope obstructing ventilation, and poor anatomical registration. We have developed a robotic bronchoscope with 7 Degrees of Freedom (DoFs), an outer diameter of 4.5 mm and an inner working channel of 2 mm. The prototype is a push/pull actuated continuum robot capable of dexterous manipulation inside the lung and visualisation/sampling of the distal airways. A prototype of the robot is engineered and a mechanics-based model of the robotic bronchoscope is developed. Furthermore, we develop a novel numerical solver that improves the computational efficiency of the model and facilitates the deployment of the robot. Experiments are performed to verify the design and to evaluate the accuracy and computational cost of the model. Results demonstrate that the model can predict the shape of the robot in <0.011 s with a mean error of 1.76 cm, enabling the future deployment of a robotic bronchoscope in MV patients.
